Papers and articles:
- What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? Alex Kendall and Yarin Gal, NIPS 2017
- Computer Vision: Models, Learning, and Inference, Simon J. D. Prince (book)
- Pattern Recognition and Machine Learning, Christopher Bishop (book)
- https://medium.com/@jdwittenauer/a-sampling-of-monte-carlo-methods-8a37bfd19
- https://sudeepraja.github.io/Bayes/
- https://www.countbayesie.com
- Kingma and Welling, 2014, Auto-Encoding Variational Bayes
- Rezende, Mohamed et al., ICML 2014 (VAE)
- Aaron Courville, DLSS 2015 (VAE)
- Semi-Supervised Learning with Deep Generative Models, Kingma et al.
- Semi-Supervised Learning with VAEs, GitHub
- Attribute2Image, ECCV 2016
- Ruslan Salakhutdinov et al., VAE, ICLR 2016
- Deep Convolutional Inverse Graphics Network, Tejas D. Kulkarni et al.
- Ian Goodfellow et al., 2015, VAE
- https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf
- yarin.co/blog
- Ghahramani, Gal et al., 2017, medical images
- Yarin Gal on Bayesian deep learning
- https://alexgkendall.com/computer_vision/bayesian_deep_learning_for_safe_ai/
- DRAW, ICML 2015
- Show, Attend and Tell, ICML 2015
- Attention mechanism, Heuritech blog
- Spatial Transformer Network
- Recurrent Spatial Transformer Networks
- AlignDRAW
- Attend, Infer, Repeat, arXiv 2016
- Video Generation Using Dual Attention, ICCV 2017
- Two-Stage Video Generation
- Adversarial Data Programming, CVPR 2018
- Toward Controlled Generation of Text
- https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
Overview:
posterior ∝ likelihood × prior
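The proportionality above can be checked numerically. A minimal sketch over a discrete hypothesis space; the three candidate coin biases and all probabilities below are made-up illustration values, not from any of the papers listed:

```python
import numpy as np

# Discrete Bayes update: posterior ∝ likelihood × prior.
# Hypothetical example: three candidate coin biases, one observed "heads".
prior = np.array([0.3, 0.4, 0.3])        # P(theta), sums to 1
likelihood = np.array([0.2, 0.5, 0.9])   # P(D = heads | theta)

unnormalized = likelihood * prior        # proportional to the posterior
posterior = unnormalized / unnormalized.sum()  # normalize so it sums to 1

print(posterior)  # mass shifts toward the bias most consistent with "heads"
```

Normalizing by the sum of the unnormalized products is what turns the proportionality into an equality (the denominator is the evidence P(D)).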
VAEs tend to generate blurry samples because the formulation itself has an averaging effect: the pixel-wise (Gaussian) reconstruction term is minimized by averaging over the plausible outputs rather than committing to one of them. Known issue: component collapse, where some latent dimensions collapse to the prior and are ignored by the decoder.
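The averaging effect can be shown with a toy calculation rather than a full VAE. This is a sketch under a strong simplification: a decoder with a Gaussian likelihood is effectively trained with a mean-squared-error loss, and if a single prediction must explain two sharp, equally likely target modes, the MSE-optimal prediction is their mean, matching neither mode (the 1-D analogue of a blurry image):

```python
import numpy as np

# Two sharp, equally likely target modes (stand-ins for two plausible images).
targets = np.array([0.0, 1.0])

# Scan candidate single predictions and measure the expected squared error.
candidates = np.linspace(-0.5, 1.5, 201)
mse = np.array([np.mean((targets - c) ** 2) for c in candidates])

best = candidates[mse.argmin()]
print(best)  # the minimizer is the mean of the modes, 0.5 — a blurry compromise
```

In a real VAE the same averaging happens per pixel whenever the posterior over images is multimodal, which is one common explanation for the blurriness noted above.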